natural extension
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
Can We Measure the Impact of a Database?
This is undoubtedly the case for scientific and statistical databases, which have largely replaced traditional reference works. Database and Web technologies have led to an explosion in the number of databases that support scientific research, for obvious reasons: Databases provide faster communication of knowledge, hold larger volumes of data, are more easily searched, and are both human- and machine-readable. Moreover, they can be developed rapidly and collaboratively by a mixture of researchers and curators. For example, more than 1,500 curated databases are relevant to molecular biology alone.10 The value of these databases lies not only in the data they present but also in how they organize that data.
Reviews: Regret Bounds for Learning State Representations in Reinforcement Learning
This paper proposes a natural extension of UCRL2 to learning state representations. The proposed algorithm optimistically chooses among a finite set of candidate MDPs and their corresponding policies. The algorithm is analyzed and shown to improve on existing regret bounds. The paper was discussed, and all reviewers agree that this is a natural extension of UCRL2 that deserves to be published.
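As a rough illustration (not the paper's actual algorithm), the "optimism in the face of uncertainty" principle behind UCRL2-style methods amounts to scoring each candidate model or policy by an upper confidence bound and picking the highest. The function names and the simple bandit-style bonus below are illustrative assumptions:

```python
import math

def ucb_index(mean_reward, n_samples, t, c=1.0):
    # Upper confidence bound: empirical mean plus an exploration bonus
    # that shrinks as a candidate is sampled more often.
    return mean_reward + c * math.sqrt(math.log(t) / max(n_samples, 1))

def choose_optimistically(stats, t):
    # stats: list of (mean_reward, n_samples), one entry per candidate.
    scores = [ucb_index(m, n, t) for m, n in stats]
    return max(range(len(scores)), key=scores.__getitem__)

# A rarely-sampled candidate can win on optimism despite a lower mean.
print(choose_optimistically([(0.5, 1), (0.55, 100)], t=100))  # → 0
```

The point of the bonus term is that under-explored candidates get the benefit of the doubt, which is what drives the regret guarantees of this family of algorithms.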
Extending choice assessments to choice functions: An algorithm for computing the natural extension
Decadt, Arne, Erreygers, Alexander, De Bock, Jasper
This leads to a single optimal decision, or a set of optimal decisions all of which are equivalent. In the theory of imprecise probabilities, where multiple probabilistic models are considered simultaneously, this decision rule can be generalised in multiple ways; Troffaes [1] provides a nice overview. A typical feature of the resulting decision rules is that they will not always yield a single optimal decision, as a decision that is optimal in one probability model may for example be suboptimal in another. We here take this generalisation yet another step further by adopting the theory of choice functions: a mathematical framework for decision-making that incorporates several (imprecise) decision rules as special cases, including the classical approach of maximising expected utility [2, 3, 4]. An important feature of this framework of choice functions is that it allows one to impose axioms directly on the decisions that are represented by such a choice function [3, 4, 5].
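To make the abstract's point concrete, here is a small sketch (an illustration, not code from the paper) of one such generalised decision rule, maximality, for a finite set of probability models: an option is kept if no other option has strictly higher expected utility under every model, so several options can survive:

```python
def expectation(p, gamble):
    # Expected utility of a gamble (payoff vector) under probability model p.
    return sum(pi * gi for pi, gi in zip(p, gamble))

def maximal_options(options, models):
    """Options not strictly dominated in expectation under all models."""
    def dominated(a):
        return any(
            all(expectation(p, b) > expectation(p, a) for p in models)
            for b in options if b is not a
        )
    return [a for a in options if not dominated(a)]

# Two probability models over two states; three gambles.
models = [[0.5, 0.5], [0.2, 0.8]]
options = [[1.0, 0.0], [0.0, 1.0], [0.4, 0.4]]
print(maximal_options(options, models))  # → [[1.0, 0.0], [0.0, 1.0]]
```

With a single model this reduces to classical expected-utility maximisation; with several models it typically returns a set of incomparable options, which is exactly the feature the abstract describes.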
Results about sets of desirable gamble sets
Coherent sets of desirable gamble sets are used as a model for representing an agent's opinions and choice preferences under uncertainty. In this paper we provide some results about the axioms required for coherence and the natural extension of a given set of desirable gamble sets. We also show that coherent sets of desirable gamble sets can be represented by a proper filter of coherent sets of desirable gambles. This paper was primarily written in 2021, overlapping with my finishing up Campbell-Moore (2021). There is some overlap between this paper and de Cooman et al. (2023); the results of this paper were obtained independently.
Unbiased Weight Maximization
A biologically plausible method for training an Artificial Neural Network (ANN) involves treating each unit as a stochastic Reinforcement Learning (RL) agent, thereby considering the network as a team of agents. Consequently, all units can learn via REINFORCE, a local learning rule modulated by a global reward signal, which aligns more closely with biologically observed forms of synaptic plasticity. Nevertheless, this learning method is often slow and scales poorly with network size due to inefficient structural credit assignment, since a single reward signal is broadcast to all units without considering individual contributions. Weight Maximization, a proposed solution, replaces a unit's reward signal with the norm of its outgoing weight, thereby allowing each hidden unit to maximize the norm of the outgoing weight instead of the global reward signal. In this research report, we analyze the theoretical properties of Weight Maximization and propose a variant, Unbiased Weight Maximization. This new approach provides an unbiased learning rule that increases learning speed and improves asymptotic performance. Notably, to our knowledge, this is the first learning rule for a network of Bernoulli-logistic units that is unbiased and scales well with the number of units in the network in terms of learning speed.
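The REINFORCE rule the abstract builds on can be sketched for a single Bernoulli-logistic unit as follows. This is a generic illustration under stated assumptions, not the report's Unbiased Weight Maximization rule; Weight Maximization would, per the abstract, replace the global `reward` for a hidden unit with the norm of its outgoing weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_logistic_reinforce(x, w, reward, lr=0.1):
    """One REINFORCE update for a single Bernoulli-logistic unit.

    The unit fires with probability sigmoid(w . x); the score function of
    the Bernoulli likelihood yields the gradient (a - p) * x, which is then
    modulated by the (here, global) reward signal.
    """
    p = 1.0 / (1.0 + np.exp(-w @ x))        # firing probability
    a = float(rng.random() < p)             # stochastic binary action
    w_new = w + lr * reward * (a - p) * x   # score-function (REINFORCE) update
    return a, w_new
```

Because every unit receives the same scalar `reward`, each unit's gradient estimate is unbiased but very noisy, which is the structural credit assignment problem the report sets out to fix.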
Iterative Double Clustering for Unsupervised and Semi-Supervised Learning
We present a powerful meta-clustering technique called Iterative Double Clustering (IDC). The IDC method is a natural extension of the recent Double Clustering (DC) method of Slonim and Tishby that exhibited impressive performance on text categorization tasks [12]. Using synthetically generated data, we empirically find that whenever the DC procedure is successful in recovering some of the structure hidden in the data, the extended IDC procedure can incrementally compute a significantly more accurate classification. IDC is especially advantageous when the data exhibits high attribute noise. Our simulation results also show the effectiveness of IDC in text categorization problems.
The Ultimate Toolbox Of ML Startups
Setting up a good tool stack for your Machine Learning team is important for working efficiently and staying focused on delivering results. If you work at a startup, you know it is especially important to set up an environment that can grow with your team, the needs of your users, and the rapidly evolving ML landscape. To tackle this challenge, we wondered: "What are the best tools, libraries and frameworks that ML startups use?" To answer that question, we asked 41 Machine Learning startups from all over the world. Read on to figure out what will work for your machine learning team.
Inference with Choice Functions Made Practical
Decadt, Arne, De Bock, Jasper, de Cooman, Gert
We study how to infer new choices from previous choices in a conservative manner. To make such inferences, we use the theory of choice functions: a unifying mathematical framework for conservative decision making that allows one to impose axioms directly on the represented decisions. We here adopt the coherence axioms of De Bock and De Cooman (2019). We show how to naturally extend any given choice assessment to such a coherent choice function, whenever possible, and use this natural extension to make new choices. We present a practical algorithm to compute this natural extension and provide several methods that can be used to improve its scalability.
AI bot predicts World Series winners
America has been glued to its TV screens since the MLB playoffs began on October 1. As the field has whittled down to just four teams, oddsmakers are eager to figure out which team has the edge. Researchers at DataRobot thought it would be a fun exercise to pull all of the MLB data from the last few decades and have their AI figure out who will win the 2019 World Series. At the start of the playoffs, the AI predicted the Los Angeles Dodgers were most likely to win the pennant, followed closely by the Houston Astros. In the American League, DataRobot's AI said the Houston Astros had a 40% probability of winning the American League, followed by the New York Yankees at 25% and the Minnesota Twins at 18%.
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > New York (0.25)
- North America > United States > Minnesota (0.25)
- Europe > United Kingdom > England > Greater London > London > Wimbledon (0.05)